
Conversation

DarkLight1337
Member

@DarkLight1337 DarkLight1337 commented Apr 28, 2025

Introduce ProcessingCache.get_item to retrieve both the hash and the corresponding multi-modal item from the cache in a single call, so that we don't have to recompute the hash in BaseMultiModalProcessor.apply.

Notes:

  • I intentionally omitted debug logging in ProcessingCache.get_item to avoid a merge conflict with [Metrics] Log multi-modal cache stats (#16478).
  • I also factored out some steps in BaseMultiModalProcessor._cached_apply_hf_processor and EncDecMultiModalProcessor.apply to promote code reuse.
  • The enable_sanity_checks parameter of BaseMultiModalProcessor.__init__ has been removed, as it's not that useful.
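To illustrate the idea behind the change, here is a minimal sketch (not the actual vLLM implementation; ProcessingCacheSketch, CacheItem, and the hashing scheme are hypothetical stand-ins) of a cache whose get_item computes the content hash once and returns it together with any cached value, so callers can reuse the hash instead of recomputing it:

```python
# Hypothetical sketch of a "hash once per item" cache lookup.
# The real vLLM ProcessingCache hashes multi-modal data differently;
# this only demonstrates returning hash + cached value together.
import hashlib
import pickle
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class CacheItem:
    item_hash: str        # hash computed exactly once per lookup
    value: Optional[Any]  # cached value, or None on a cache miss


class ProcessingCacheSketch:
    def __init__(self) -> None:
        self._cache: dict[str, Any] = {}

    def _hash(self, obj: Any) -> str:
        # Stand-in for the real multi-modal item hashing.
        return hashlib.sha256(pickle.dumps(obj)).hexdigest()

    def get_item(self, obj: Any) -> CacheItem:
        """Return the hash and the cached value (if any) in one call."""
        item_hash = self._hash(obj)
        return CacheItem(item_hash, self._cache.get(item_hash))

    def put(self, item_hash: str, value: Any) -> None:
        # Callers pass the hash from get_item instead of rehashing.
        self._cache[item_hash] = value


cache = ProcessingCacheSketch()
item = ("image", (224, 224))
lookup = cache.get_item(item)  # miss: lookup.value is None
cache.put(lookup.item_hash, "processed")
hit = cache.get_item(item)     # hit: cached value found under the same hash
```

The point of the combined return value is that the caller (here, the apply step) never needs a second hashing pass: a miss hands back the hash to use when storing the processed result.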

@DarkLight1337 DarkLight1337 requested a review from Isotr0py April 28, 2025 16:07
@DarkLight1337 DarkLight1337 requested a review from ywang96 as a code owner April 28, 2025 16:07

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the multi-modality Related to multi-modality (#4194) label Apr 28, 2025
Signed-off-by: DarkLight1337 <[email protected]>
@DarkLight1337 DarkLight1337 changed the title [VLM] Compute the hash only once per item [VLM] Compute multimodal hash only once per item Apr 28, 2025
@DarkLight1337 DarkLight1337 changed the title [VLM] Compute multimodal hash only once per item [Optimizatino] Compute multimodal hash only once per item Apr 28, 2025
@DarkLight1337 DarkLight1337 changed the title [Optimizatino] Compute multimodal hash only once per item [Optim] Compute multimodal hash only once per item Apr 28, 2025
Copy link
Member

@Isotr0py Isotr0py left a comment

Overall LGTM!

@Isotr0py Isotr0py added the ready ONLY add when PR is ready to merge/full CI is needed label Apr 28, 2025
@DarkLight1337 DarkLight1337 merged commit 506475d into vllm-project:main Apr 29, 2025
66 checks passed
@DarkLight1337 DarkLight1337 deleted the hash-once branch April 29, 2025 01:40
jikunshang pushed a commit to jikunshang/vllm that referenced this pull request Apr 29, 2025
lk-chen pushed a commit to lk-chen/vllm that referenced this pull request Apr 29, 2025
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025
zzzyq pushed a commit to zzzyq/vllm that referenced this pull request May 24, 2025